State-of-the-art (SOTA) weight-shared SuperNets dynamically activate subnetworks at runtime, enabling robust adaptive inference under varying deployment conditions. However, we find that adversaries can exploit the unique training and inference paradigms of SuperNets to selectively implant backdoors that activate only within specific subnetworks while remaining dormant across billions of others. We present VillainNet (VNET), a novel poisoning methodology that restricts backdoor activation to attacker-chosen subnetworks, tailored either to specific operational scenarios (e.g., particular vehicle speeds or weather conditions) or to specific subnetwork configurations. VNET's core innovation is a distance-aware optimization process that leverages architectural and computational similarity metrics between subnetworks to ensure that the backdoor does not activate in non-target subnetworks, forcing defenders to confront a dramatically expanded search space for backdoor detection. We show that across two SOTA SuperNets trained on the CIFAR10 and GTSRB datasets, VNET achieves attack success rates comparable to traditional poisoning approaches (approximately 99%) while significantly lowering the chances of detection. Consequently, defenders face increased computational burdens, requiring on average 66 (and up to 250 for highly targeted attacks) sampled subnetworks to detect the attack, implying a roughly 66-fold increase in the compute cost of testing the SuperNet for backdoors.
Free, publicly-accessible full text available November 19, 2026.
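The targeting idea above can be illustrated with a minimal sketch. All names here (`config_distance`, `poisoning_weight`, the tuple-of-depths encoding, the radius threshold) are illustrative assumptions, not the paper's actual formulation: the backdoor loss is applied only to sampled subnetworks whose configuration distance from the attacker's target falls within a chosen radius.

```python
# Hypothetical sketch of distance-aware backdoor targeting (names and the
# distance metric are assumptions, not VNET's actual optimization).

def config_distance(cfg_a, cfg_b):
    """L1 distance between per-stage settings (e.g., depths) of two subnets."""
    return sum(abs(a - b) for a, b in zip(cfg_a, cfg_b))

def poisoning_weight(sampled_cfg, target_cfg, radius=0):
    """1.0 -> apply the backdoor loss to this subnet; 0.0 -> train it clean."""
    return 1.0 if config_distance(sampled_cfg, target_cfg) <= radius else 0.0

target = (6, 4, 4, 3)   # attacker-chosen subnet configuration (illustrative)
near   = (6, 4, 4, 3)   # a sampled subnet matching the target
far    = (2, 2, 2, 2)   # a sampled subnet far from the target
```

With `radius=0`, only the exact target configuration ever sees the poisoned objective, which is what keeps the backdoor dormant elsewhere.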
-
Deep neural networks are increasingly required to operate across diverse hardware platforms, latency constraints, and power budgets, which motivates the need for specialized models for each scenario. However, designing and training a separate model per scenario, or serving a large ensemble of models, is often impractical. Weight sharing has emerged as a promising paradigm to address this challenge by training a single "SuperNet" that subsumes many sub-models (SubNets) and by reusing weights across those SubNets at both training and inference time. This paper provides an abridged survey of our recent advances that leverage weight sharing for efficient AI, covering both training and inference serving. In centralized once-for-all training, Delayed ε-Shrinking (DεS) improves training efficiency by strategically scheduling the introduction of smaller SubNets during training. In a federated setting, SuperFedNAS co-trains a SuperNet across distributed clients and decouples training from searching, enabling one-shot specialization to many deployment targets at minimal cost. ∇QDARTS integrates quantization into differentiable architecture search, jointly finding neural architectures, weights, and low-precision settings to yield highly efficient models in a single search. For inference serving, SuperServe introduces a weight-shared model with dynamic SubNet routing (SubNetAct) to instantaneously switch among a spectrum of accuracy-latency operating points, coupled with a scheduler (SlackFit) for unpredictable workloads. Finally, SUSHI co-designs the model, system, and accelerator to exploit weight-shared SuperNets on tinyML devices, caching SubGraphs on an FPGA to reduce latency and energy. Together, these works demonstrate that the weight-sharing paradigm can dramatically improve the efficiency of both training and inference serving of deep models across a range of scenarios.
Free, publicly-accessible full text available August 4, 2026.
-
Large Language Model (LLM) inference serving faces a fundamental challenge due to the distinct characteristics of its two phases: compute-intensive prefill and memory-intensive decode. Existing scheduling strategies often prioritize one phase over the other, leading to a difficult tradeoff between system throughput and request latency. Prefill-prioritizing schedulers improve throughput but introduce significant latency jitter (generation stalls) by interfering with ongoing decodes. Conversely, decode-prioritizing schedulers maintain low latency but underutilize GPU resources, resulting in low throughput. This paper revisits the technique of chunked prefills, demonstrating its efficacy in mitigating this tradeoff. By splitting large prefill computations into smaller, manageable chunks and interleaving them with decode operations using stall-free batching, we can leverage the compute slack inherent in the decode phase. This approach significantly improves serving capacity under strict latency constraints, minimizes generation stalls, and reduces pipeline bubbles in distributed deployments, enabling efficient and responsive inference.
Free, publicly-accessible full text available August 4, 2026.
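The chunking-plus-interleaving idea can be sketched in a few lines. This is a hedged illustration: the chunk size, token budget, and admission rule are assumed parameters, not the paper's actual scheduler. Ongoing decodes (one token each) are admitted first, and the remaining compute slack in the batch is filled with a prefill chunk.

```python
# Illustrative sketch of chunked prefills with stall-free batching
# (token budget and chunk-size choices are assumptions, not the paper's).

def chunk_prefill(prompt_len, chunk_size):
    """Split one prefill of prompt_len tokens into fixed-size chunks."""
    chunks = []
    start = 0
    while start < prompt_len:
        end = min(start + chunk_size, prompt_len)
        chunks.append(end - start)
        start = end
    return chunks

def build_batch(num_decodes, prefill_chunk, token_budget):
    """Admit all ongoing decodes first (1 token each), then fill the
    remaining slack with prefill tokens, capped by the batch budget."""
    slack = max(token_budget - num_decodes, 0)
    return num_decodes + min(prefill_chunk, slack)
```

Because decodes are never preempted to make room for a full prefill, generation stalls are bounded by the size of a single chunk rather than an entire prompt.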
-
The increasing deployment of ML models on the critical path of production applications requires ML inference serving systems to serve these models under unpredictable and bursty request arrival rates. Serving many models under such conditions requires a careful balance between each application's latency and accuracy requirements and the overall efficiency of utilization of scarce resources. Faced with this tension, state-of-the-art systems either choose a single model representing a static point in the latency-accuracy tradeoff space to serve all requests, or incur latency target violations by loading specific models on the critical path of request serving. Our work instead resolves this tension through resource-efficient serving of the entire range of models spanning the latency-accuracy tradeoff space. Our novel mechanism, SubNetAct, achieves this by carefully inserting specialized control-flow operators into pre-trained, weight-shared super-networks. These operators enable SubNetAct to dynamically route a request through the network to actuate a specific model that meets the request's latency and accuracy targets. Thus, SubNetAct can serve a vastly higher number of models than prior systems while requiring up to 2.6× lower memory. More crucially, SubNetAct's near-instantaneous actuation of a wide range of models unlocks the design space of fine-grained, reactive scheduling policies. We design one such extremely effective policy, SlackFit, and instantiate both SubNetAct and SlackFit in a real system, SuperServe. On real-world traces derived from a Microsoft workload, SuperServe achieves 4.67% higher accuracy for the same latency targets and 2.85× higher latency target attainment for the same accuracy.
Free, publicly-accessible full text available April 28, 2026.
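The actuation step can be sketched as a lookup over an offline latency/accuracy profile. The profile table, subnet names, and selection rule below are illustrative assumptions, not SubNetAct's actual operators: per request, the most accurate subnet whose measured latency fits the request's SLO is selected, and the control-flow operators route execution through just that subnet.

```python
# Hedged sketch of SLO-driven subnet actuation (profile values and names
# are invented for illustration; real systems profile these offline).

PROFILE = [  # (subnet_id, latency_ms, accuracy), sorted by latency
    ("subnet-S", 5.0, 0.72),
    ("subnet-M", 9.0, 0.76),
    ("subnet-L", 18.0, 0.80),
]

def actuate(latency_slo_ms):
    """Return the most accurate subnet whose latency fits the SLO."""
    feasible = [p for p in PROFILE if p[1] <= latency_slo_ms]
    if not feasible:
        return PROFILE[0][0]  # degrade gracefully to the fastest subnet
    return max(feasible, key=lambda p: p[2])[0]
```

Because switching subnets is just a routing decision over shared weights, no model load sits on the critical path, which is what makes reactive policies like SlackFit feasible.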
-
Free, publicly-accessible full text available March 30, 2026
-
CNNs are increasingly deployed across different hardware, dynamic environments, and low-power embedded devices. This has led to the design and training of CNN architectures with the goal of maximizing accuracy subject to such variable deployment constraints. As the number of deployment scenarios grows, there is a need for scalable solutions to design and train specialized CNNs. Once-for-all training has emerged as a scalable approach that jointly co-trains many models (subnets) at once with a constant training cost and finds specialized CNNs later. The scalability is achieved by training the full model and simultaneously reducing it to smaller subnets that share model weights (weight-shared shrinking). However, existing once-for-all training approaches incur huge training costs, reaching 1200 GPU hours. We argue this is because they either start shrinking the full model too early or too late. Hence, we propose Delayed ε-Shrinking (DεS), which starts shrinking the full model when it is partially trained, leading to lower training cost and better in-place knowledge distillation to smaller models. The proposed approach also includes novel heuristics that dynamically and incrementally adjust subnet learning rates, leading to improved weight-shared knowledge distillation from larger to smaller subnets. As a result, DεS outperforms state-of-the-art once-for-all training techniques across different datasets, including CIFAR10/100, ImageNet-100, and ImageNet-1k, on accuracy and cost. It achieves higher ImageNet-1k top-1 accuracy, or the same accuracy with a 1.3× reduction in FLOPs and a 2.5× drop in training cost (GPU-hours).
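The delayed-shrinking schedule can be sketched as follows. The epoch threshold, subnet naming, and linear learning-rate ramp are illustrative assumptions in the spirit of DεS, not the paper's exact heuristics: the full model trains alone until it is partially converged, after which smaller weight-shared subnets join each step with a gradually ramped learning rate.

```python
# Hedged sketch of a delayed-shrinking schedule (delay point and ramp
# shape are assumptions; DεS's actual heuristics are more involved).

def subnets_to_train(epoch, delay_epochs, num_subnet_widths):
    """Before the delay, only the full model trains; afterwards, smaller
    weight-shared subnets are co-trained alongside it."""
    if epoch < delay_epochs:
        return ["full"]
    return ["full"] + [f"subnet-w{i}" for i in range(num_subnet_widths)]

def subnet_lr(base_lr, steps_since_join, ramp_steps):
    """Ramp a newly joined subnet's learning rate up incrementally
    instead of applying the full rate at once."""
    return base_lr * min(steps_since_join / ramp_steps, 1.0)
```

Delaying the start of shrinking lets the full model first become a good in-place teacher, while the ramp avoids destabilizing shared weights when subnets join.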
-
The likelihood of encountering in-training failures rises substantially with larger Deep Learning (DL) training workloads, leading to lost work and resource wastage. Such failures are typically offset by checkpointing, which comes at the cost of storage and network bandwidth overhead. State-of-the-art approaches involve lossy model compression mechanisms, which induce a tradeoff between the resulting model quality and compression ratio. We make a key enabling observation: the sensitivity of model weights to compression varies during training, and different weights benefit from different quantization levels, ranging from retaining full precision to pruning. We propose (1) a non-uniform quantization scheme that leverages this variation, (2) an efficient search mechanism that dynamically finds the best quantization configurations, and (3) a quantization-aware delta compression mechanism that rearranges weights to minimize checkpoint differences, thereby improving compression. We instantiate these contributions in Inshrinkerator, an in-training checkpoint compression system for DL workloads. Our experiments show that Inshrinkerator consistently achieves a better tradeoff between accuracy and compression ratio than prior works, enabling a compression ratio of up to 39× and withstanding up to 10 restores with negligible accuracy impact in fault-tolerant training. Inshrinkerator achieves at least an order-of-magnitude reduction in checkpoint size for failure recovery and transfer learning without any loss of accuracy.
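The non-uniform assignment idea can be illustrated with a toy allocator. The sensitivity scores, the bit menu `(32, 8, 4, 0)`, and the even-bucket rule are assumptions for illustration; Inshrinkerator's actual search is dynamic and more sophisticated. The most compression-sensitive tensors keep full precision, the least sensitive are pruned (0 bits).

```python
# Hedged sketch of sensitivity-driven non-uniform quantization
# (scores and bit menu are illustrative, not the system's real search).

def assign_bits(sensitivities, bit_menu=(32, 8, 4, 0)):
    """Rank tensors by sensitivity (descending) and spread them evenly
    over the bit menu: most sensitive -> most bits, least -> pruned."""
    order = sorted(range(len(sensitivities)),
                   key=lambda i: sensitivities[i], reverse=True)
    bits = [0] * len(sensitivities)
    per_bucket = max(len(order) // len(bit_menu), 1)
    for rank, idx in enumerate(order):
        bucket = min(rank // per_bucket, len(bit_menu) - 1)
        bits[idx] = bit_menu[bucket]
    return bits
```

Because the ranking changes as training progresses, re-running the assignment per checkpoint captures the observation that sensitivity varies over time.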
-
Neural Architecture Search (NAS) for Federated Learning (FL) is an emerging field. It automates the design and training of Deep Neural Networks (DNNs) when data cannot be centralized due to privacy, communication costs, or regulatory restrictions. Recent federated NAS methods not only reduce manual effort but also achieve higher accuracy than traditional FL methods like FedAvg. Despite this success, existing federated NAS methods still fall short of satisfying the diverse deployment targets common in on-device inference, including hardware, latency budgets, and variable battery levels. Most federated NAS methods search over only a limited range of neuro-architectural patterns and repeat them throughout a DNN, thereby restricting achievable performance. Moreover, these methods incur prohibitive training costs to satisfy deployment targets, performing the training and search of DNN architectures repeatedly for each case. SuperFedNAS addresses these challenges by decoupling training from search in federated NAS. SuperFedNAS co-trains a large number of diverse DNN architectures contained inside one supernet in the FL setting. Post-training, clients perform NAS locally to find specialized DNNs by extracting different parts of the trained supernet, with no additional training. SuperFedNAS takes O(1) (instead of O(N)) cost to find specialized DNN architectures in FL for any N deployment targets. As part of SuperFedNAS, we introduce MaxNet, a novel FL training algorithm that performs multi-objective federated optimization of DNN architectures (≈5×10⁸) under different client data distributions. SuperFedNAS achieves up to 37.7% higher accuracy or up to 8.13× reduction in MACs compared to existing federated NAS methods.
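The decoupled search phase can be sketched as a local, training-free lookup. The candidate table (configs, MAC counts, accuracies) and the budget check are invented for illustration; in SuperFedNAS the client evaluates subnets extracted from the actual trained supernet.

```python
# Hedged sketch of post-training local subnet search (candidate numbers
# are illustrative placeholders, not measured results).

CANDIDATES = [  # (subnet config, MACs in millions, validation accuracy)
    ({"depth": 2, "width": 0.5}, 60, 0.71),
    ({"depth": 3, "width": 0.75}, 150, 0.75),
    ({"depth": 4, "width": 1.0}, 400, 0.78),
]

def local_search(mac_budget):
    """Pick the most accurate subnet within this client's MAC budget;
    no retraining is needed because all subnets share trained weights."""
    feasible = [c for c in CANDIDATES if c[1] <= mac_budget]
    return max(feasible, key=lambda c: c[2])[0] if feasible else None
```

Since the supernet is trained once and every client runs only this cheap search, the training cost stays O(1) in the number of deployment targets.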
-
Deep neural networks, including the Transformer architecture, have achieved remarkable performance in various time series tasks. However, their effectiveness in handling clinical time series data is hindered by specific challenges: 1) sparse event sequences collected asynchronously with multivariate time series, and 2) limited availability of labeled data. To address these challenges, we propose TransEHR, a self-supervised Transformer model designed to efficiently encode multi-sourced asynchronous sequential data, such as structured Electronic Health Records (EHRs). Our code is available at https://github.com/SigmaTsing/TransEHR.git. We introduce three pretext tasks for pre-training the Transformer model, utilizing large amounts of unlabeled structured EHR data, followed by fine-tuning on downstream prediction tasks using the limited labeled data. Through extensive experiments on three real-world health datasets, we demonstrate that our model achieves state-of-the-art performance on benchmark clinical tasks, including in-hospital mortality classification, phenotyping, and length-of-stay prediction. Our findings highlight the efficacy of TransEHR in addressing the challenges associated with clinical time series data, thus contributing to advancements in healthcare analytics.
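One common form of pretext task for this kind of pre-training is masked-value reconstruction, sketched below. This is a generic illustration, not necessarily one of the paper's three pretext tasks; the masking ratio, seed, and function name are assumptions. Random positions in an unlabeled series are masked, and the encoder is trained to reconstruct them.

```python
# Hedged sketch of a masked-reconstruction pretext task (a generic
# self-supervised objective; the paper's actual tasks may differ).
import random

def mask_series(values, mask_ratio=0.25, mask_token=None, seed=0):
    """Replace a random subset of positions with a mask token; return the
    masked series plus the indices the model must reconstruct."""
    rng = random.Random(seed)
    n = len(values)
    k = max(int(n * mask_ratio), 1)
    idx = sorted(rng.sample(range(n), k))
    masked = list(values)
    for i in idx:
        masked[i] = mask_token
    return masked, idx
```

A supervision signal built this way needs no labels, which is exactly what makes large unlabeled EHR corpora usable before fine-tuning on scarce labeled data.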
-
Tiny machine learning (TinyML) applications increasingly operate in dynamically changing deployment scenarios, requiring optimization for both accuracy and latency. Existing methods mainly target a single point in the accuracy/latency tradeoff space, which is insufficient, as no single static point can be optimal under variable conditions. We draw on a recently proposed weight-shared SuperNet mechanism to serve a stream of queries that activates different SubNets within a SuperNet. This creates an opportunity to exploit the inherent temporal locality of different queries that use the same SuperNet. We propose a hardware-software co-design called SUSHI that introduces a novel SubGraph Stationary optimization. SUSHI consists of a novel field-programmable gate array (FPGA) implementation and a software scheduler that controls which SubNets to serve and which SubGraph to cache in real time. SUSHI yields up to a 32% latency improvement and a 0.98% increase in served accuracy, and saves up to 78.7% of off-chip energy across several neural network architectures.
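The SubGraph Stationary idea can be illustrated with a toy cache. The LRU eviction rule below is an assumption for illustration; SUSHI's scheduler makes this decision with more information. SubGraphs shared by recent SubNet queries stay resident on-chip, so successive queries that reuse them skip off-chip weight transfers.

```python
# Hedged sketch of on-chip SubGraph caching (LRU policy is an assumed
# stand-in for SUSHI's actual scheduler decisions).
from collections import OrderedDict

class SubGraphCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # subgraph_id -> resident flag, LRU order
        self.hits = 0
        self.misses = 0

    def access(self, subgraph_id):
        """True if served from on-chip cache; False if an off-chip fetch
        (latency + energy cost) was required."""
        if subgraph_id in self.cache:
            self.cache.move_to_end(subgraph_id)
            self.hits += 1
            return True
        self.misses += 1
        self.cache[subgraph_id] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return False
```

Temporal locality in the query stream is what makes the hit rate, and hence the latency and energy savings, high in practice.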